Quantile-based bias correction and uncertainty quantification of extreme event attribution statements
Extreme event attribution characterizes how anthropogenic climate change may
have influenced the probability and magnitude of selected individual extreme
weather and climate events. Attribution statements often involve quantification
of the fraction of attributable risk (FAR) or the risk ratio (RR) and
associated confidence intervals. Many such analyses use climate model output to
characterize extreme event behavior with and without anthropogenic influence.
However, such climate models may have biases in their representation of extreme
events. To account for discrepancies in the probabilities of extreme events
between observational datasets and model datasets, we demonstrate an
appropriate rescaling of the model output based on the quantiles of the
datasets to estimate an adjusted risk ratio. Our methodology accounts for
various components of uncertainty in estimation of the risk ratio. In
particular, we present an approach to construct a one-sided confidence interval
on the lower bound of the risk ratio when the estimated risk ratio is infinity.
We demonstrate the methodology using the summer 2011 central US heatwave and
output from the Community Earth System Model. In this example, we find that the
lower bound of the risk ratio is relatively insensitive to the magnitude and
probability of the actual event.
Comment: 28 pages, 4 figures, 3 tables
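The core of the quantile-based rescaling can be sketched in a few lines: the observed event magnitude is translated onto the model's own scale by matching quantiles between the observations and the factual model ensemble, so that exceedance probabilities are evaluated at a threshold consistent with the model's biased climatology. This is a minimal illustration, not the paper's full method; the function and variable names are hypothetical, and the actual methodology also quantifies the uncertainty of the adjusted ratio.

```python
import numpy as np

def adjusted_risk_ratio(obs, model_factual, model_counterfactual, event_value):
    """Bias-adjusted risk ratio via quantile rescaling (minimal sketch)."""
    obs = np.asarray(obs, dtype=float)
    # Non-exceedance probability of the observed event in the observations
    q = np.mean(obs <= event_value)
    # Threshold occupying the same quantile in the (biased) factual ensemble
    model_threshold = np.quantile(model_factual, q)
    # Exceedance probabilities at the rescaled threshold
    p_factual = np.mean(model_factual > model_threshold)
    p_counterfactual = np.mean(model_counterfactual > model_threshold)
    # An infinite estimate arises when the counterfactual never exceeds it
    return np.inf if p_counterfactual == 0 else p_factual / p_counterfactual
```

When `p_counterfactual` is zero the point estimate is infinite, which is exactly the situation that motivates the one-sided lower confidence bound discussed in the abstract.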
Quantifying statistical uncertainty in the attribution of human influence on severe weather
Event attribution in the context of climate change seeks to understand the
role of anthropogenic greenhouse gas emissions on extreme weather events,
either specific events or classes of events. A common approach to event
attribution uses climate model output under factual (real-world) and
counterfactual (world that might have been without anthropogenic greenhouse gas
emissions) scenarios to estimate the probabilities of the event of interest
under the two scenarios. Event attribution is then quantified by the ratio of
the two probabilities. While this approach has been applied many times in the
last 15 years, the statistical techniques used to estimate the risk ratio based
on climate model ensembles have not drawn on the full set of methods available
in the statistical literature and have in some cases used and interpreted the
bootstrap method in non-standard ways. We present a precise frequentist
statistical framework for quantifying the effect of sampling uncertainty on
estimation of the risk ratio, propose the use of statistical methods that are
new to event attribution, and evaluate a variety of methods using statistical
simulations. We conclude that existing statistical methods not yet in use for
event attribution have several advantages over the widely-used bootstrap,
including better statistical performance in repeated samples and robustness to
small estimated probabilities. Software implementing the methods is available
through the climextRemes package for R and Python. While we focus on
frequentist statistical methods, Bayesian methods are likely to be particularly
useful when considering sources of uncertainty beyond sampling uncertainty.
Comment: 41 pages, 11 figures, 1 table
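For orientation, the ratio of event probabilities estimated from factual and counterfactual ensembles, together with a textbook log-scale normal-approximation interval, can be written as follows. This Wald-type interval is shown only to make the estimation problem concrete; it is not one of the specific methods evaluated in the paper, and it is exactly the kind of approximation that degrades for the small counts the paper highlights.

```python
import math
from statistics import NormalDist

def risk_ratio_ci(x1, n1, x0, n0, conf=0.90):
    """Wald-type confidence interval for the risk ratio on the log scale.

    x1, n1: event count and ensemble size under the factual scenario;
    x0, n0: the same under the counterfactual scenario.
    """
    p1, p0 = x1 / n1, x0 / n0
    if x0 == 0:
        # Estimated risk ratio is infinite; only a one-sided
        # lower bound is meaningful in this case.
        return math.inf, (math.nan, math.inf)
    rr = p1 / p0
    # Delta-method standard error of log(RR) under binomial sampling
    se = math.sqrt((1 - p1) / (n1 * p1) + (1 - p0) / (n0 * p0))
    z = NormalDist().inv_cdf(1 - (1 - conf) / 2)
    return rr, (rr * math.exp(-z * se), rr * math.exp(z * se))
```

Note how the interval is multiplicative around the estimate, reflecting that sampling uncertainty in a ratio of small probabilities is naturally expressed on the log scale.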
Quantifying the effect of interannual ocean variability on the attribution of extreme climate events to human influence
In recent years, the climate change research community has become highly
interested in describing the anthropogenic influence on extreme weather events,
commonly termed "event attribution." Limitations in the observational record
and in computational resources motivate the use of uncoupled,
atmosphere/land-only climate models with prescribed ocean conditions run over a
short period, leading up to and including an event of interest. In this
approach, large ensembles of high-resolution simulations can be generated under
factual observed conditions and counterfactual conditions that might have been
observed in the absence of human interference; these can be used to estimate
the change in probability of the given event due to anthropogenic influence.
However, using a prescribed ocean state ignores the possibility that estimates
of attributable risk might be a function of the ocean state. Thus, the
uncertainty in attributable risk is likely underestimated, implying an
over-confidence in anthropogenic influence.
In this work, we estimate the year-to-year variability in calculations of the
anthropogenic contribution to extreme weather based on large ensembles of
atmospheric model simulations. Our results both quantify the magnitude of
year-to-year variability and categorize the degree to which conclusions of
attributable risk are qualitatively affected. The methodology is illustrated by
exploring extreme temperature and precipitation events for the northwest coast
of South America and northern-central Siberia; we also provide results for
regions around the globe. While it remains preferable to perform a full
multi-year analysis, the results presented here can serve as an indication of
where and when attribution researchers should be concerned about the use of
atmosphere-only simulations.
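The year-to-year sensitivity described above amounts to computing the risk ratio separately for each prescribed ocean year and examining the spread of the resulting estimates. A minimal sketch, assuming an illustrative `(years, ensemble members)` array layout rather than the paper's actual data format:

```python
import numpy as np

def yearly_risk_ratios(factual, counterfactual, threshold):
    """Risk ratio computed separately for each prescribed-ocean year.

    The spread of the returned ratios indicates how sensitive the
    attribution statement is to the choice of ocean state.
    """
    factual = np.asarray(factual, dtype=float)
    counterfactual = np.asarray(counterfactual, dtype=float)
    # Per-year exceedance probabilities across ensemble members
    p1 = (factual > threshold).mean(axis=1)
    p0 = (counterfactual > threshold).mean(axis=1)
    # Where the counterfactual probability is zero, the ratio is infinite
    return np.divide(p1, p0, out=np.full_like(p1, np.inf), where=p0 > 0)
```

A wide range in the per-year ratios would signal that a single-year, prescribed-ocean attribution study understates its uncertainty.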
The effect of geographic sampling on evaluation of extreme precipitation in high resolution climate models
Traditional approaches for comparing global climate models and observational
data products typically fail to account for the geographic location of the
underlying weather station data. For modern high-resolution models, this is an
oversight since there are likely grid cells where the physical output of a
climate model is compared with a statistically interpolated quantity instead of
actual measurements of the climate system. In this paper, we quantify the
impact of geographic sampling on the relative performance of high-resolution
climate models' representation of precipitation extremes in boreal winter (DJF)
over the contiguous United States (CONUS), comparing model output from five
early submissions to the HighResMIP subproject of the CMIP6 experiment. We find
that properly accounting for the geographic sampling of weather stations can
significantly change the assessment of model performance. Across the models
considered, failing to account for sampling impacts the different metrics
(extreme bias, spatial pattern correlation, and spatial variability) in
different ways (both increasing and decreasing). We argue that the geographic
sampling of weather stations should be accounted for in order to yield a more
straightforward and appropriate comparison between models and observational
data sets, particularly for high-resolution models. While we focus on the CONUS
in this paper, our results have important implications for other global land
regions where the sampling problem is more severe.
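Accounting for geographic sampling reduces, in essence, to evaluating the model only at grid cells that actually contain station data before computing any skill metric. A minimal sketch for a single metric (mean bias), with hypothetical 2-D fields rather than actual HighResMIP output or a real station network:

```python
import numpy as np

def mean_bias(model_field, obs_field, station_mask):
    """Mean bias with and without geographic sampling of stations.

    `station_mask` is True for grid cells containing at least one
    weather station; elsewhere the gridded "observations" are
    statistically interpolated rather than measured.
    """
    diff = np.asarray(model_field, dtype=float) - np.asarray(obs_field, dtype=float)
    naive = diff.mean()                              # every grid cell
    sampled = diff[np.asarray(station_mask)].mean()  # station cells only
    return naive, sampled
```

The gap between `naive` and `sampled` is a simple diagnostic of how much the assessment of a model changes once interpolated-only cells are excluded.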
Do you agree? Contrasting Google's core web vitals and the impact of cookie consent banners with actual web QoE
Providing sophisticated web Quality of Experience (QoE) has become paramount for web service providers and network operators alike. Due to advances in web technologies (HTML5, responsive design, etc.), traditional web QoE models focusing mainly on loading times have to be refined and improved. In this work, we relate Google’s Core Web Vitals, a set of metrics for improving user experience, to the loading time aspects of web QoE, and investigate whether the Core Web Vitals and web QoE agree on the perceived experience. To this end, we first perform objective measurements in the web using Google’s Lighthouse. To close the gap between metrics and experience, we complement these objective measurements with subjective assessment by performing multiple crowdsourcing QoE studies. For this purpose, we developed CWeQS, a publicly available, customized framework that emulates the entire web page loading process and asks users for their experience while controlling the Core Web Vitals. To properly configure CWeQS for the planned QoE study and the crowdsourcing setup, we conduct pre-studies, in which we evaluate the importance of the loading strategy of a web page and the importance of the user task. The obtained insights allow us to conduct the desired QoE studies for each of the Core Web Vitals. Furthermore, we assess the impact of cookie consent banners, which have become ubiquitous due to regulatory demands, on the Core Web Vitals and investigate their influence on web QoE. Our results suggest that the Core Web Vitals are much less predictive for web QoE than expected and that page loading times remain the main metric and influence factor in this context.
We further observe that end-users prefer unobtrusive and acentric cookie consent banners, and that additional delays caused by interacting with consent banners in order to agree to or reject cookies should be accounted for along with the actual page load time to reduce waiting times and thus improve web QoE.
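On the objective-measurement side, Lighthouse emits a JSON report whose `audits` section carries the metric values used in studies like this one. A minimal sketch of extracting the Core-Web-Vitals-related lab metrics, using a fabricated report fragment rather than a real measurement:

```python
import json

# Audit keys as named in Lighthouse's JSON report. Lighthouse is a lab
# tool, so total-blocking-time serves as the lab proxy for the
# field-only First Input Delay metric.
CWV_AUDITS = ("largest-contentful-paint", "cumulative-layout-shift",
              "total-blocking-time")

def extract_core_web_vitals(report_json):
    """Pull Core-Web-Vitals-related values out of a Lighthouse JSON report."""
    audits = json.loads(report_json)["audits"]
    return {key: audits[key]["numericValue"]
            for key in CWV_AUDITS if key in audits}

# Fabricated report fragment, for illustration only
sample = json.dumps({"audits": {
    "largest-contentful-paint": {"numericValue": 2450.0},  # milliseconds
    "cumulative-layout-shift": {"numericValue": 0.08},     # unitless score
}})
```

Relating values extracted this way to crowdsourced opinion scores is precisely the comparison the study performs.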